{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Learning Bayesian Networks from Data\n", "\n", "\n", "Previous notebooks showed how Bayesian networks economically encode a probability distribution over a set of variables, and how they can be used e.g. to predict variable states, or to generate new samples from the joint distribution. This section will be about obtaining a Bayesian network, given a set of sample data. Learning a Bayesian network can be split into two problems:\n", "\n", " **Parameter learning:** Given a set of data samples and a DAG that captures the dependencies between the variables, estimate the (conditional) probability distributions of the individual variables.\n", " \n", " **Structure learning:** Given a set of data samples, estimate a DAG that captures the dependencies between the variables.\n", " \n", "This notebook aims to illustrate how parameter learning and structure learning can be done with pgmpy.\n", "Currently, the library supports:\n", " - Parameter learning for *discrete* nodes:\n", " - Maximum Likelihood Estimation\n", " - Bayesian Estimation\n", " - Structure learning for *discrete*, *fully observed* networks:\n", " - Score-based structure estimation (BIC/BDeu/K2 score; exhaustive search, hill climb/tabu search)\n", " - Constraint-based structure estimation (PC)\n", " - Hybrid structure estimation (MMHC)\n", "\n", "\n", "## Parameter Learning\n", "\n", "Suppose we have the following data:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " fruit tasty size\n", "0 banana yes large\n", "1 apple no large\n", "2 banana yes large\n", "3 apple yes small\n", "4 banana yes large\n", "5 apple yes large\n", "6 banana yes large\n", "7 apple yes small\n", "8 apple yes large\n", "9 apple yes large\n", "10 banana yes large\n", "11 banana no large\n", "12 apple no small\n", "13 banana no small\n" ] } ], "source": [ "import pandas as pd\n", "data = pd.DataFrame(data={'fruit': [\"banana\", \"apple\", \"banana\", \"apple\", \"banana\",\"apple\", \"banana\", \n", " \"apple\", \"apple\", \"apple\", \"banana\", \"banana\", \"apple\", \"banana\",], \n", " 'tasty': [\"yes\", \"no\", \"yes\", \"yes\", \"yes\", \"yes\", \"yes\", \n", " \"yes\", \"yes\", \"yes\", \"yes\", \"no\", \"no\", \"no\"], \n", " 'size': [\"large\", \"large\", \"large\", \"small\", \"large\", \"large\", \"large\",\n", " \"small\", \"large\", \"large\", \"large\", \"large\", \"small\", \"small\"]})\n", "print(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We know that the variables relate as follows:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/ankur/pgmpy_notebook/notebooks/pgmpy/models/BayesianModel.py:8: FutureWarning: BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.\n", " warnings.warn(\n" ] } ], "source": [ "from pgmpy.models import BayesianModel\n", "\n", "model = BayesianModel([('fruit', 'tasty'), ('size', 'tasty')]) # fruit -> tasty <- size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parameter learning is the task to estimate the values of the conditional probability distributions (CPDs), for the variables `fruit`, `size`, and `tasty`. \n", "\n", "#### State counts\n", "To make sense of the given data, we can start by counting how often each state of the variable occurs. 
If the variable is dependent on parents, the counts are done conditionally on the parents' states, i.e. separately for each parent configuration:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " fruit\n", "apple 7\n", "banana 7\n", "\n", " fruit apple banana \n", "size large small large small\n", "tasty \n", "no 1.0 1.0 1.0 1.0\n", "yes 3.0 2.0 5.0 0.0\n" ] } ], "source": [ "from pgmpy.estimators import ParameterEstimator\n", "pe = ParameterEstimator(model, data)\n", "print(\"\\n\", pe.state_counts('fruit')) # unconditional\n", "print(\"\\n\", pe.state_counts('tasty')) # conditional on fruit and size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see, for example, that as many apples as bananas were observed and that `5` large bananas were tasty, while only `1` was not.\n", "\n", "#### Maximum Likelihood Estimation\n", "\n", "A natural estimate for the CPDs is to simply use the *relative frequencies* with which the variable states have occurred. We observed `7 apples` among a total of `14 fruits`, so we might guess that about `50%` of `fruits` are `apples`.\n", "\n", "This approach is *Maximum Likelihood Estimation (MLE)*. According to MLE, we should fill the CPDs in such a way that $P(\\text{data}|\\text{model})$ is maximal. This is achieved when using the *relative frequencies*. See [1], Section 17.1 for an introduction to ML parameter estimation. pgmpy supports MLE as follows:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+---------------+-----+\n", "| fruit(apple) | 0.5 |\n", "+---------------+-----+\n", "| fruit(banana) | 0.5 |\n", "+---------------+-----+\n", "+------------+--------------+-----+---------------+\n", "| fruit | fruit(apple) | ... | fruit(banana) |\n", "+------------+--------------+-----+---------------+\n", "| size | size(large) | ... | size(small) |\n", "+------------+--------------+-----+---------------+\n", "| tasty(no) | 0.25 | ... | 1.0 |\n", "+------------+--------------+-----+---------------+\n", "| tasty(yes) | 0.75 | ... | 0.0 |\n", "+------------+--------------+-----+---------------+\n" ] } ], "source": [ "from pgmpy.estimators import MaximumLikelihoodEstimator\n", "mle = MaximumLikelihoodEstimator(model, data)\n", "print(mle.estimate_cpd('fruit')) # unconditional\n", "print(mle.estimate_cpd('tasty')) # conditional" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`mle.estimate_cpd(variable)` computes the state counts and divides each cell by the (conditional) sample size. The `mle.get_parameters()`-method returns a list of CPDs for all variables of the model.\n", "\n", "The built-in `fit()`-method of `BayesianModel` provides more convenient access to parameter estimators:\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Calibrate all CPDs of `model` using MLE:\n", "model.fit(data, estimator=MaximumLikelihoodEstimator)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "While very straightforward, the ML estimator has the problem of *overfitting* to the data. In the above CPD, the probability of a large banana being tasty is estimated at `0.833`, because `5` out of `6` observed large bananas were tasty. Fine. But note that the probability of a small banana being tasty is estimated at `0.0`, because we observed only one small banana and it happened to be not tasty. 
But that should hardly make us certain that small bananas aren't tasty!\n", "We simply do not have enough observations to rely on the observed frequencies. If the observed data is not representative of the underlying distribution, ML estimates will be extremely far off. \n", "\n", "When estimating parameters for Bayesian networks, lack of data is a frequent problem. Even if the total sample size is very large, the fact that state counts are done conditionally for each parent configuration causes immense fragmentation. If a variable has 3 parents that can each take 10 states, then state counts will be done separately for `10^3 = 1000` parent configurations. This makes MLE very fragile and unstable for learning Bayesian Network parameters. A way to mitigate MLE's overfitting is *Bayesian Parameter Estimation*.\n", "\n", "#### Bayesian Parameter Estimation\n", "\n", "The Bayesian Parameter Estimator starts with already existing prior CPDs that express our beliefs about the variables *before* the data was observed. Those \"priors\" are then updated using the state counts from the observed data. See [1], Section 17.3 for a general introduction to Bayesian estimators.\n", "\n", "One can think of the priors as consisting of *pseudo state counts* that are added to the actual counts before normalization.\n", "Unless one wants to encode specific beliefs about the distributions of the variables, one commonly chooses uniform priors, i.e. ones that deem all states equiprobable.\n", "\n", "A very simple prior is the so-called *K2* prior, which simply adds `1` to the count of every single state.\n", "A somewhat more sensible choice of prior is *BDeu* (Bayesian Dirichlet equivalent uniform prior). For BDeu we need to specify an *equivalent sample size* `N` and then the pseudo-counts are the equivalent of having observed `N` uniform samples of each variable (and each parent configuration). In pgmpy:\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+------------+---------------------+-----+---------------------+\n", "| fruit | fruit(apple) | ... | fruit(banana) |\n", "+------------+---------------------+-----+---------------------+\n", "| size | size(large) | ... | size(small) |\n", "+------------+---------------------+-----+---------------------+\n", "| tasty(no) | 0.34615384615384615 | ... | 0.6428571428571429 |\n", "+------------+---------------------+-----+---------------------+\n", "| tasty(yes) | 0.6538461538461539 | ... | 0.35714285714285715 |\n", "+------------+---------------------+-----+---------------------+\n" ] } ], "source": [ "from pgmpy.estimators import BayesianEstimator\n", "est = BayesianEstimator(model, data)\n", "\n", "print(est.estimate_cpd('tasty', prior_type='BDeu', equivalent_sample_size=10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The estimated values in the CPDs are now more conservative. In particular, the estimate for a small banana being not tasty is now around `0.64` rather than `1.0`. Setting `equivalent_sample_size` to `10` means that for each parent configuration, we add the equivalent of 10 uniform samples (here: `+5` small bananas that are tasty and `+5` that aren't).\n", "\n", "`BayesianEstimator`, too, can be used via the `fit()`-method. 
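For the fruit model above, this is a one-liner (a sketch; the keyword arguments are passed through to the estimator, just as in the full example below):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: re-fit the CPDs of `model` with a BDeu prior (assumes the fruit data from above)\n", "model.fit(data, estimator=BayesianEstimator, prior_type='BDeu', equivalent_sample_size=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "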
Full example:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/ankur/pgmpy_notebook/notebooks/pgmpy/models/BayesianModel.py:8: FutureWarning: BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "+------+----------+\n", "| A(0) | 0.511788 |\n", "+------+----------+\n", "| A(1) | 0.488212 |\n", "+------+----------+\n", "+------+---------------------+---------------------+\n", "| A | A(0) | A(1) |\n", "+------+---------------------+---------------------+\n", "| B(0) | 0.49199687682998244 | 0.5002046245140168 |\n", "+------+---------------------+---------------------+\n", "| B(1) | 0.5080031231700176 | 0.49979537548598324 |\n", "+------+---------------------+---------------------+\n", "+------+--------------------+-----+---------------------+\n", "| A | A(0) | ... | A(1) |\n", "+------+--------------------+-----+---------------------+\n", "| D | D(0) | ... | D(1) |\n", "+------+--------------------+-----+---------------------+\n", "| C(0) | 0.4882005899705015 | ... | 0.5085907138474126 |\n", "+------+--------------------+-----+---------------------+\n", "| C(1) | 0.5117994100294986 | ... | 0.49140928615258744 |\n", "+------+--------------------+-----+---------------------+\n", "+------+--------------------+---------------------+\n", "| B | B(0) | B(1) |\n", "+------+--------------------+---------------------+\n", "| D(0) | 0.5120845921450151 | 0.48414271555996036 |\n", "+------+--------------------+---------------------+\n", "| D(1) | 0.4879154078549849 | 0.5158572844400396 |\n", "+------+--------------------+---------------------+\n" ] } ], "source": [ "import numpy as np\n", "import pandas as pd\n", "from pgmpy.models import BayesianModel\n", "from pgmpy.estimators import BayesianEstimator\n", "\n", "# generate data\n", "data = pd.DataFrame(np.random.randint(low=0, high=2, size=(5000, 4)), columns=['A', 'B', 'C', 'D'])\n", "model = BayesianModel([('A', 'B'), ('A', 'C'), ('D', 'C'), ('B', 'D')])\n", "\n", "model.fit(data, estimator=BayesianEstimator, prior_type=\"BDeu\") # default equivalent_sample_size=5\n", "for cpd in model.get_cpds():\n", " print(cpd)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Structure Learning\n", "\n", "To learn model structure (a DAG) from a data set, there are two broad techniques:\n", "\n", " - score-based structure learning\n", " - constraint-based structure learning\n", "\n", "The combination of both techniques allows further improvement:\n", " - hybrid structure learning\n", "\n", "We briefly discuss all approaches and give examples.\n", "\n", "### Score-based Structure Learning\n", "\n", "\n", "This approach construes model selection as an optimization task. It has two building blocks:\n", "\n", "- A _scoring function_ $s_D\\colon M \\to \\mathbb R$ that maps models to a numerical score, based on how well they fit to a given data set $D$.\n", "- A _search strategy_ to traverse the search space of possible models $M$ and select a model with optimal score.\n", "\n", "\n", "#### Scoring functions\n", "\n", "Commonly used scores to measure the fit between model and data are _Bayesian Dirichlet scores_ such as *BDeu* or *K2* and the _Bayesian Information Criterion_ (BIC, also called MDL). See [1], Section 18.3 for a detailed introduction on scores. 
As before, BDeu is dependent on an equivalent sample size." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-13938.353002020234\n", "-14329.194269073454\n", "-14294.390420213556\n", "-20906.432489257266\n", "-20933.26023936978\n", "-20950.47339067585\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/ankur/pgmpy_notebook/notebooks/pgmpy/models/BayesianModel.py:8: FutureWarning: BayesianModel has been renamed to BayesianNetwork. Please use BayesianNetwork class, BayesianModel will be removed in future.\n", " warnings.warn(\n" ] } ], "source": [ "import pandas as pd\n", "import numpy as np\n", "from pgmpy.estimators import BDeuScore, K2Score, BicScore\n", "from pgmpy.models import BayesianModel\n", "\n", "# create random data sample with 3 variables, where Z is dependent on X, Y:\n", "data = pd.DataFrame(np.random.randint(0, 4, size=(5000, 2)), columns=list('XY'))\n", "data['Z'] = data['X'] + data['Y']\n", "\n", "bdeu = BDeuScore(data, equivalent_sample_size=5)\n", "k2 = K2Score(data)\n", "bic = BicScore(data)\n", "\n", "model1 = BayesianModel([('X', 'Z'), ('Y', 'Z')]) # X -> Z <- Y\n", "model2 = BayesianModel([('X', 'Z'), ('X', 'Y')]) # Y <- X -> Z\n", "\n", "\n", "print(bdeu.score(model1))\n", "print(k2.score(model1))\n", "print(bic.score(model1))\n", "\n", "print(bdeu.score(model2))\n", "print(k2.score(model2))\n", "print(bic.score(model2))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the scores vary slightly, we can see that the correct `model1` has a much higher score than `model2`.\n", "Importantly, these scores _decompose_, i.e. they can be computed locally for each of the variables given their potential parents, independent of other parts of the network:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-9282.88160824462\n", "-6993.603560250576\n", "-57.1217389219957\n" ] } ], "source": [ "print(bdeu.local_score('Z', parents=[]))\n", "print(bdeu.local_score('Z', parents=['X']))\n", "print(bdeu.local_score('Z', parents=['X', 'Y']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Search strategies\n", "The search space of DAGs is super-exponential in the number of variables and the above scoring functions have local maxima. The first property makes exhaustive search intractable for all but very small networks, and the second prevents efficient local optimization algorithms from always finding the optimal structure. Thus, identifying the ideal structure is often not tractable. 
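To get a feeling for how fast the search space grows, the number of possible DAGs on $n$ labeled nodes can be computed with Robinson's recurrence (a small side calculation for illustration; not part of pgmpy):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from math import comb\n", "\n", "# Number of labeled DAGs on n nodes via Robinson's recurrence (illustration only)\n", "def num_dags(n):\n", "    a = [1]\n", "    for m in range(1, n + 1):\n", "        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]\n", "                     for k in range(1, m + 1)))\n", "    return a[n]\n", "\n", "print([num_dags(n) for n in range(1, 6)])  # [1, 3, 25, 543, 29281]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "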
Despite this bad news, heuristic search strategies often yield good results.\n", "\n", "If only a few nodes are involved (read: fewer than 5), `ExhaustiveSearch` can be used to compute the score for every DAG and return the best-scoring one:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[('X', 'Z'), ('Y', 'Z')]\n", "\n", "All DAGs by score:\n", "-14294.390420213556 [('X', 'Z'), ('Y', 'Z')]\n", "-14330.086974085189 [('X', 'Z'), ('Y', 'Z'), ('Y', 'X')]\n", "-14330.086974085189 [('X', 'Y'), ('X', 'Z'), ('Z', 'Y')]\n", "-14330.08697408519 [('Y', 'X'), ('Z', 'X'), ('Z', 'Y')]\n", "-14330.08697408519 [('Y', 'Z'), ('Y', 'X'), ('Z', 'X')]\n", "-14330.08697408519 [('X', 'Y'), ('Z', 'X'), ('Z', 'Y')]\n", "-14330.08697408519 [('X', 'Y'), ('X', 'Z'), ('Y', 'Z')]\n", "-16586.926723773093 [('Y', 'X'), ('Z', 'X')]\n", "-16587.66791728165 [('X', 'Y'), ('Z', 'Y')]\n", "-18657.937087116316 [('Z', 'X'), ('Z', 'Y')]\n", "-18657.937087116316 [('Y', 'Z'), ('Z', 'X')]\n", "-18657.937087116316 [('X', 'Z'), ('Z', 'Y')]\n", "-20914.776836804216 [('Z', 'X')]\n", "-20914.776836804216 [('X', 'Z')]\n", "-20915.518030312778 [('Z', 'Y')]\n", "-20915.518030312778 [('Y', 'Z')]\n", "-20950.47339067585 [('X', 'Z'), ('Y', 'X')]\n", "-20950.47339067585 [('X', 'Y'), ('Z', 'X')]\n", "-20950.47339067585 [('X', 'Y'), ('X', 'Z')]\n", "-20951.21458418441 [('Y', 'X'), ('Z', 'Y')]\n", "-20951.21458418441 [('Y', 'Z'), ('Y', 'X')]\n", "-20951.21458418441 [('X', 'Y'), ('Y', 'Z')]\n", "-23172.357780000675 []\n", "-23208.05433387231 [('Y', 'X')]\n", "-23208.05433387231 [('X', 'Y')]\n" ] } ], "source": [ "from pgmpy.estimators import ExhaustiveSearch\n", "\n", "es = ExhaustiveSearch(data, scoring_method=bic)\n", "best_model = es.estimate()\n", "print(best_model.edges())\n", "\n", "print(\"\\nAll DAGs by score:\")\n", "for score, dag in reversed(es.all_scores()):\n", " print(score, dag.edges())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once more nodes are involved, one needs to switch to heuristic search. `HillClimbSearch` implements a greedy local search that starts from the DAG `start` (default: disconnected DAG) and proceeds by iteratively performing single-edge manipulations that maximally increase the score. The search terminates once a local maximum is found.\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "e1f781a2436e4066ab8e015257068eb8", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/1000000 [00:00